
Fix: Support Sequential models in Grad-CAM example#2244

Open
jaytiwarihub wants to merge 4 commits into keras-team:master from
jaytiwarihub:fix-gradcam-sequential

Conversation

@jaytiwarihub

This PR fixes a compatibility issue in examples/vision/grad_cam.py where the make_gradcam_heatmap function would fail for Sequential models.

Currently, the function uses model.inputs to construct the gradient model. While this works for Functional API models (like Xception in the example), it often raises errors or behaves unexpectedly with simple Sequential models that haven't explicitly built their graph with an InputLayer.

Changes:

Changed model.inputs to model.input in make_gradcam_heatmap.

This ensures the function works robustly for both Sequential and Functional models, as model.input is the standard way to access the single input tensor for image classifiers.
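
The change can be sketched as follows. This is a minimal illustration of the described fix, not the tutorial's actual code: the model, layer names, and shapes here are invented for the example, and it assumes a TensorFlow/Keras environment.

```python
import numpy as np
from tensorflow import keras

# A simple Sequential classifier; the Input layer builds the graph so
# that model.input and intermediate layer outputs are available.
model = keras.Sequential(
    [
        keras.Input(shape=(32, 32, 3)),
        keras.layers.Conv2D(8, 3, activation="relu", name="last_conv"),
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(10, activation="softmax"),
    ]
)

last_conv_layer_name = "last_conv"

# model.input (singular) returns the single input tensor directly, so the
# same line works whether the model was built with the Sequential or the
# Functional API; model.inputs returns a list, which is the part the PR
# reports as fragile for Sequential models.
grad_model = keras.models.Model(
    model.input, [model.get_layer(last_conv_layer_name).output, model.output]
)

conv_out, preds = grad_model(np.zeros((1, 32, 32, 3), dtype="float32"))
print(conv_out.shape, preds.shape)  # (1, 30, 30, 8) (1, 10)
```

From here, `conv_out` and `preds` are what the tutorial's `make_gradcam_heatmap` differentiates to produce the heatmap.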

@gemini-code-assist
Contributor

Summary of Changes

Hello @jaytiwarihub, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical compatibility issue within the Grad-CAM example, ensuring that the visualization technique can be applied consistently across different Keras model architectures. By adjusting how the input tensor is accessed, the example now robustly supports both Functional API and simpler Sequential models, broadening its utility and preventing common errors for users.

Highlights

  • Grad-CAM Sequential Model Compatibility: The make_gradcam_heatmap function in the Grad-CAM example now correctly supports Keras Sequential models by using model.input instead of model.inputs when constructing the gradient model. This resolves an issue where model.inputs would fail or behave unexpectedly for Sequential models without an explicit InputLayer.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses a compatibility issue with Keras Sequential models in the Grad-CAM example by switching from model.inputs to model.input. This change enhances the robustness of the example, making it work for both Functional and Sequential models. I've added one minor comment regarding code formatting.

Comment on lines 73 to 75

   grad_model = keras.models.Model(
       model.input, [model.get_layer(last_conv_layer_name).output, model.output]
   )
Severity: medium

The core change from model.inputs to model.input is correct and improves compatibility. However, there's a minor indentation issue introduced. The grad_model variable is indented with 3 spaces instead of the standard 4, which violates PEP 8 formatting guidelines. I've suggested a fix to correct the indentation.

Suggested change

-   grad_model = keras.models.Model(
-       model.input, [model.get_layer(last_conv_layer_name).output, model.output]
-   )
+    grad_model = keras.models.Model(
+        model.input, [model.get_layer(last_conv_layer_name).output, model.output]
+    )

@divyashreepathihalli
Collaborator

A corresponding .md file will need to be generated. Instructions here - https://github.com/keras-team/keras-io?tab=readme-ov-file#adding-a-new-code-example

@jaytiwarihub
Author

jaytiwarihub commented Jan 22, 2026

@divyashreepathihalli Thanks for the review! I have updated the grad_cam.md file as requested. I also included a small fix in scripts/tutobooks.py. The autogen script was crashing on Windows due to encoding errors (UnicodeDecodeError), so I added explicit encoding='utf-8' handling to fix it. Everything is syncing correctly now.
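
The encoding fix described above can be sketched like this. The helper names and file paths are illustrative (the actual change is in scripts/tutobooks.py); the point is simply passing an explicit encoding to open() instead of relying on the platform default.

```python
from pathlib import Path


def read_text(path):
    # Without encoding="utf-8", open() falls back to the platform default
    # (e.g. cp1252 on Windows), which raises UnicodeDecodeError on
    # tutorial files containing non-ASCII characters.
    with open(path, "r", encoding="utf-8") as f:
        return f.read()


def write_text(path, text):
    # Writing with an explicit encoding keeps the round trip lossless
    # across operating systems.
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)


demo = Path("demo_gradcam.md")
write_text(demo, "Grad-CAM \u2192 heatmap\n")
print(read_text(demo))
demo.unlink()
```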


@sachinprasadhs sachinprasadhs left a comment


There is no need for this change. Xception has a single input, and the models created by keras.applications generally return a single input as well, so the existing code is cleaner and works well here.

@jaytiwarihub
Author

@sachinprasadhs My intention with this change was to make make_gradcam_heatmap more robust. Many users copy-paste this function to debug their own custom models, and if they use a Sequential model, the current implementation fails because of how inputs are handled.
However, I understand that for the purpose of this specific tutorial (which uses Xception), the extra logic might be unnecessary complexity.
Would you prefer I revert this specific input-handling change and keep the function strictly optimized for Functional/Single-Input models?

@github-actions

This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.

@github-actions github-actions bot added the stale label Feb 14, 2026
@github-actions github-actions bot removed the stale label Feb 15, 2026
@github-actions

github-actions bot commented Mar 1, 2026

This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.

@github-actions github-actions bot added the stale label Mar 1, 2026